Unifying Planar and Point Mapping in Monocular SLAM
Authors
Abstract
Planar features in filter-based Visual SLAM systems require an initialisation stage that delays their use within the estimation. In this stage, surface and pose are initialised either from an already generated map of point features [2, 3] or from visual cues in the frames [4]. This delay is especially unsatisfactory in scenarios where the camera moves rapidly, so that visual features are observed only for a very limited period. In this paper we present a unified approach to mapping in which points and planes are initialised alongside each other within the same framework. The best structure emerges according to what the camera observes, thus avoiding delayed initialisation for planar features. To do this we use a parameterisation similar to the one used for planar features in [3, 4]. The Inverse Depth Planar Parameterisation (IDPP), as we call it, is used to represent both planes and points. The IDPP is combined with a point-based measurement model into which the planar constraint is introduced. The latter allows us to estimate and grow a planar structure if suitable, or to estimate a 3-D point if the visual measurements do not support the constraint. The IDPP contains three main components: (1) a reference camera (RC); (2) the depth, w.r.t. the RC, of a seed 3-D point on the plane; (3) the normal of the plane.
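The three IDPP components named above (reference camera, depth of a seed point, plane normal) can be sketched as follows. This is a minimal geometric illustration under our own assumptions, not the authors' implementation: function and variable names are ours, and we use inverse depth along unit bearing rays, with a new point's inverse depth obtained by ray-plane intersection against the plane defined by the seed point and normal.

```python
import numpy as np

# Hypothetical sketch of the Inverse Depth Planar Parameterisation (IDPP):
# a reference camera centre, the inverse depth of a seed 3-D point along
# its viewing ray, and the normal of the plane. All names are illustrative.

def seed_point(cam_centre, bearing, inv_depth):
    """3-D position of the seed point: camera centre plus bearing / inverse depth."""
    return cam_centre + bearing / inv_depth

def inv_depth_on_plane(cam_centre, bearing, seed, normal):
    """Inverse depth at which a new ray from the reference camera meets the
    plane through `seed` with normal `normal` (ray-plane intersection)."""
    return float(normal @ bearing) / float(normal @ (seed - cam_centre))

# Example: fronto-parallel plane at depth 2 along the optical axis.
c = np.zeros(3)                        # reference camera centre
m0 = np.array([0.0, 0.0, 1.0])         # unit bearing of the seed point
rho0 = 0.5                             # inverse depth -> seed at depth 2
n = np.array([0.0, 0.0, 1.0])          # plane normal
X0 = seed_point(c, m0, rho0)           # seed point at [0, 0, 2]

m1 = np.array([0.6, 0.0, 0.8])         # unit bearing of a second observation
rho1 = inv_depth_on_plane(c, m1, X0, n)
# depth along m1 must satisfy 0.8 * d = 2, so d = 2.5 and rho1 = 0.4
```

With this parameterisation, a feature whose measurements support the planar constraint keeps the intersection-derived inverse depth, while one that does not can retain an independently estimated inverse depth and behave as an ordinary 3-D point.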
Similar Papers
Appearance Based Extraction of Planar Structure in Monocular SLAM
This paper concerns the building of enhanced scene maps during real-time monocular SLAM. Specifically, we present a novel algorithm for detecting and estimating planar structure in a scene based on both geometric and appearance information. We adopt a hypothesis testing framework, in which the validity of planar patches within a triangulation of the point-based scene map are assessed agains...
Sensor Fusion of Monocular Cameras and Laser Rangefinders for Line-Based Simultaneous Localization and Mapping (SLAM) Tasks in Autonomous Mobile Robots
This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach has two features: (i) the first is a fusion module which synthesizes line segments obtained from the laser rangefinder with line features extracted from the monocular camera. This policy eliminates any pseudo segments that appear from any moment...
Efficiently Increasing Map Density in Visual SLAM Using Planar Features with Adaptive Measurement
The visual simultaneous localisation and mapping (SLAM) systems now in widespread use are based on localised point features [2, 4, 5]. Although effective in many respects, the approach has limitations when considering the density and efficiency of map representation. With a dense population of features, camera tracking can be robust, able to withstand significant occlusion and large changes in ...
SLAM-Safe Planner: Preventing Monocular SLAM Failure using Reinforcement Learning
Automating Monocular SLAM is challenging, as routine trajectory planning frameworks tend to fail primarily due to the inherent tendency of Monocular SLAM systems to break down or deviate significantly from their actual trajectory and map states. The reasons for such breakages or large deviations in trajectory estimates are manifold, ranging from degeneracies associated with planar scenes, with large c...
Integrating Monocular Vision and Odometry for SLAM
This paper presents an approach to Simultaneous Localization and Mapping (SLAM) based on monocular vision. Standard multiple-view vision techniques are used to estimate robot motion and scene structure, which are then integrated with minimal odometric information and used to build a global environment map. Preliminary experimental results are also presented and discussed. Key-Words: Robot local...